
    An Integrated Model of Speech to Arm Gestures Mapping in Human-Robot Interaction

    In multimodal human-robot interaction (HRI), communication can be established through verbal, non-verbal, and/or para-verbal cues. The linguistic literature shows that para-verbal and non-verbal communications are naturally synchronized; however, the natural mechanism of this synchronization is still largely unexplored. This research focuses on the relation between non-verbal and para-verbal communication by mapping prosody cues to the corresponding metaphoric arm gestures. Our approach for synthesizing arm gestures uses coupled hidden Markov models (CHMM), which can be seen as a collection of HMMs characterizing the segmented stream of prosodic characteristics and the segmented streams of rotation characteristics of the two arms' articulations. Experimental results with the Nao robot are reported.
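
    The abstract does not spell out the CHMM construction; as a rough, hypothetical illustration of the general technique (not the authors' implementation), a two-stream coupled HMM over a prosody chain and an arm-gesture chain can be realized as an ordinary HMM over the product state space and scored with the standard forward algorithm:

```python
import numpy as np

# Hypothetical two-stream coupled HMM, implemented as an HMM over the
# product state space (prosody state, gesture state). Sizes are illustrative.
n_p, n_g = 3, 4                      # states per chain
n = n_p * n_g                        # product states
rng = np.random.default_rng(0)

def row_stochastic(m):
    return m / m.sum(axis=1, keepdims=True)

# Coupling: the next prosody state depends on both previous states,
# and likewise for the gesture state.
A_p = row_stochastic(rng.random((n, n_p)))   # P(p_t | p_{t-1}, g_{t-1})
A_g = row_stochastic(rng.random((n, n_g)))   # P(g_t | p_{t-1}, g_{t-1})

# Product-space transition matrix P((p,g)_t | (p,g)_{t-1}).
A = np.einsum('ij,ik->ijk', A_p, A_g).reshape(n, n)
pi = np.full(n, 1.0 / n)                     # uniform initial distribution

# Per-frame emission likelihoods for each stream (faked here; in practice
# these would be Gaussian likelihoods of prosody and arm-rotation frames).
T = 50
B_p = rng.random((T, n_p))                   # P(prosody frame_t | p_t)
B_g = rng.random((T, n_g))                   # P(gesture frame_t | g_t)
B = np.einsum('tj,tk->tjk', B_p, B_g).reshape(T, n)

def forward_loglik(pi, A, B):
    """Scaled forward algorithm; returns log P(observations | model)."""
    alpha = pi * B[0]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(B)):
        alpha = (alpha @ A) * B[t]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

print(forward_loglik(pi, A, B))
```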

    Multimodal Adapted Robot Behavior Synthesis within a Narrative Human-Robot Interaction

    In human-human interaction, three modalities of communication (i.e., verbal, nonverbal, and paraverbal) are naturally coordinated so as to enhance the meaning of the conveyed message. In this paper, we try to create a similar coordination between these modalities of communication in order to make the robot behave as naturally as possible. The proposed system uses a group of videos to elicit specific target emotions in a human user, upon which interactive narratives start (i.e., interactive discussions between the participant and the robot around each video's content). During each interaction experiment, the humanoid expressive ALICE robot engages and generates a multimodal behavior adapted to the emotional content of the projected video, using speech, head-arm metaphoric gestures, and/or facial expressions. The interactive speech of the robot is synthesized using MaryTTS (a text-to-speech toolkit), which is used in parallel to generate adapted head-arm gestures [1]. This synthesized multimodal robot behavior is evaluated by the interacting human at the end of each emotion-eliciting experiment. The obtained results validate the positive effect of the multimodality of the generated robot behavior on the interaction.
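
    Implementation details are not given in the abstract; the sketch below is only an assumed illustration of how speech synthesis and a gesture command could be issued in parallel, assuming a locally running MaryTTS server exposing its default HTTP interface on port 59125 and a hypothetical send_gesture placeholder for the robot side:

```python
import threading
import requests

MARY_URL = "http://localhost:59125/process"   # default MaryTTS HTTP endpoint

def synthesize(text, locale="en_US"):
    """Ask the MaryTTS server for a WAV rendering of `text`."""
    params = {
        "INPUT_TEXT": text,
        "INPUT_TYPE": "TEXT",
        "OUTPUT_TYPE": "AUDIO",
        "AUDIO": "WAVE",
        "LOCALE": locale,
    }
    resp = requests.get(MARY_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.content                        # raw WAV bytes

def speak(text):
    """Synthesize the utterance and save it for playback."""
    audio = synthesize(text)
    with open("utterance.wav", "wb") as f:
        f.write(audio)

def send_gesture(gesture_name):
    """Hypothetical placeholder for the robot's gesture command."""
    print(f"playing gesture: {gesture_name}")

# Run speech synthesis and the metaphoric gesture in parallel so that
# the verbal and nonverbal channels start together.
text = "That scene was really moving, wasn't it?"
speech_thread = threading.Thread(target=speak, args=(text,))
gesture_thread = threading.Thread(target=send_gesture, args=("open_arms",))
speech_thread.start(); gesture_thread.start()
speech_thread.join(); gesture_thread.join()
```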

    Towards an Online Fuzzy Modeling for Human Internal States Detection

    In human-robot interaction, a socially intelligent robot should be capable of understanding the emotional internal state of the interacting human so as to behave in a proper manner. The main difficulty with this approach is that human internal states cannot be exhaustively trained for in advance, so the robot should be able to learn and classify emotional states online. This research paper focuses on developing a novel online incremental learning approach to human emotional states using the Takagi-Sugeno (TS) fuzzy model. When new data arrive, a decisive criterion determines whether the new elements constitute a new cluster or confirm one of the previously existing clusters. If the new data are attributed to an existing cluster, the evolving fuzzy rules of the TS model may be updated, either by adding a new rule or by modifying existing rules, according to the descriptive potential of the new data elements with respect to all existing cluster centers. However, if a new cluster is formed, a corresponding new TS fuzzy model is created and then updated as new data elements are attributed to it. The subtractive clustering algorithm is used to calculate the cluster centers that represent the rules of the TS models. Experimental results show the effectiveness of the proposed method.
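
    The precise decision criterion is the paper's contribution and is not reproduced here; the following numpy sketch only illustrates the general subtractive-clustering-style idea of online rule creation, where the radius, threshold, and learning rate are assumed values:

```python
import numpy as np

RADIUS = 0.5           # assumed cluster radius (subtractive-clustering r_a)
NEW_CLUSTER_THR = 0.4  # assumed threshold below which a new cluster is created
ALPHA = 4.0 / RADIUS**2

centers = []           # one center per fuzzy rule / emotion cluster

def potential(x, centers):
    """Potential of sample x with respect to the existing cluster centers."""
    if not centers:
        return 0.0
    d2 = np.sum((np.asarray(centers) - x) ** 2, axis=1)
    return float(np.max(np.exp(-ALPHA * d2)))

def update(x, lr=0.1):
    """Online step: create a new cluster or nudge the closest existing one."""
    p = potential(x, centers)
    if p < NEW_CLUSTER_THR:               # x is far from every known cluster
        centers.append(np.array(x, dtype=float))
        return "new rule created"
    i = int(np.argmin([np.linalg.norm(c - x) for c in centers]))
    centers[i] += lr * (np.asarray(x) - centers[i])   # refine existing rule
    return f"rule {i} updated"

# Example: a stream of 2-D feature vectors arriving one at a time.
for sample in [(0.1, 0.2), (0.15, 0.22), (0.9, 0.8), (0.88, 0.82)]:
    print(update(np.array(sample)))
```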

    A Model for Synthesizing a Combined Verbal and Nonverbal Behavior Based on Personality Traits in Human-Robot Interaction

    In Human-Robot Interaction (HRI) scenarios, an intelligent robot should be able to synthesize an appropriate behavior adapted to the human's profile (i.e., personality). Recent research studies have discussed the effect of personality traits on human verbal and nonverbal behaviors. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of human speech. This research tries to map human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion-introversion personality dimension. We explore the human-robot personality matching aspect and the similarity attraction principle, in addition to the different effects on the interaction of the adapted combined robot behavior expressed through speech and gestures versus the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.
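
    The abstract gives no explicit formula for this mapping; purely as an illustrative assumption, one simple way to realize the extraversion-introversion adaptation is to scale expressivity parameters of speech and gesture with an extraversion score (all parameter names, baselines, and ranges below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class BehaviorParams:
    speech_rate: float        # words per minute
    pitch_range: float        # relative pitch variation (0..1)
    gesture_amplitude: float  # relative arm extension (0..1)
    gesture_rate: float       # gestures per utterance

def adapt_to_extraversion(extraversion: float) -> BehaviorParams:
    """Map an extraversion score in [0, 1] (0 = introvert, 1 = extravert)
    to more or less expressive speech and gesture parameters.
    Baseline values and scaling factors are illustrative assumptions."""
    e = max(0.0, min(1.0, extraversion))
    return BehaviorParams(
        speech_rate=140 + 60 * e,         # faster speech for extraverts
        pitch_range=0.3 + 0.5 * e,        # wider pitch variation
        gesture_amplitude=0.2 + 0.7 * e,  # more expansive gestures
        gesture_rate=1 + 3 * e,           # more frequent gestures
    )

print(adapt_to_extraversion(0.8))   # parameters for an extraverted profile
```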

    Interactive Robot Learning for Multimodal Emotion Recognition

    Interaction plays a critical role in skill learning for natural communication. In human-robot interaction (HRI), robots can get feedback during the interaction to improve their social abilities. In this context, we propose an interactive robot learning framework using multimodal data from thermal facial images and human gait data for online emotion recognition. We also propose a new decision-level fusion method for the multimodal classification using a Random Forest (RF) model. Our hybrid online emotion recognition model focuses on the detection of four human emotions (i.e., neutral, happiness, anger, and sadness). After offline training and testing with the hybrid model, the accuracy of the online emotion recognition system is more than 10% lower than that of the offline one. In order to improve our system, human verbal feedback is injected into the robot's interactive learning. With the new online emotion recognition system, a 12.5% increase in accuracy is obtained compared with the online system without interactive robot learning.
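
    The proposed fusion method is not detailed in the abstract; the sketch below shows only a generic decision-level fusion baseline with scikit-learn (one Random Forest per modality, fused by a weighted average of class probabilities, on synthetic stand-in data), not the paper's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happiness", "anger", "sadness"]

# Synthetic stand-ins for thermal-face features and gait features.
n = 400
y = rng.integers(0, len(EMOTIONS), size=n)
X_thermal = rng.normal(size=(n, 32)) + y[:, None] * 0.5
X_gait = rng.normal(size=(n, 16)) + y[:, None] * 0.3

# One Random Forest per modality.
rf_thermal = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_thermal, y)
rf_gait = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_gait, y)

def fuse_predict(x_thermal, x_gait, w_thermal=0.6, w_gait=0.4):
    """Decision-level fusion: weighted average of per-modality class probabilities."""
    p = (w_thermal * rf_thermal.predict_proba([x_thermal])[0]
         + w_gait * rf_gait.predict_proba([x_gait])[0])
    return EMOTIONS[int(np.argmax(p))]

print(fuse_predict(X_thermal[0], X_gait[0]))
```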

    An Online Fuzzy-Based Approach for Human Emotions Detection: An Overview on the Human Cognitive Model of Understanding and Generating Multimodal Actions

    An intelligent robot needs to be able to understand human emotions, and to understand and generate actions through cognitive systems that operate in a similar way to human cognition. In this chapter, we mainly focus on developing an online incremental learning system for emotions using the Takagi-Sugeno (TS) fuzzy model. Additionally, we present a general overview of understanding and generating multimodal actions from the cognitive point of view. The main objective of this system is to detect whether the observed emotion requires a new corresponding multimodal action to be generated, in case it constitutes a new emotion cluster not learned before, or whether it can be attributed to one of the existing actions in memory, in case it belongs to an existing cluster.
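
    As a schematic illustration of this decision (not the chapter's actual architecture), the snippet below keeps a simple memory that maps emotion clusters to multimodal actions, generating and storing a new action only when a new cluster appears; the cluster test itself would come from the TS fuzzy model:

```python
# Schematic link between emotion clusters and multimodal actions (illustrative only).
action_memory = {}   # cluster id -> stored multimodal action

def handle_emotion(cluster_id, is_new_cluster, synthesize_action=None):
    if is_new_cluster:
        # Unseen emotion cluster: generate a new multimodal action and store it.
        action_memory[cluster_id] = synthesize_action(cluster_id)
        return action_memory[cluster_id]
    # Known cluster: reuse the action already associated with it in memory.
    return action_memory[cluster_id]

# Example usage with a placeholder action generator.
new_action = handle_emotion(3, True, lambda c: {"speech": "soothing reply",
                                                "gesture": "open_arms",
                                                "face": "concern"})
same_action = handle_emotion(3, False)
print(new_action == same_action)
```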

    Prosody-Based Adaptive Metaphoric Head and Arm Gestures Synthesis in Human Robot Interaction

    In human-human interaction, the process of communication can be established through three modalities: verbal, non-verbal (i.e., gestures), and/or para-verbal (i.e., prosody). The linguistic literature shows that para-verbal and non-verbal cues are naturally aligned and synchronized; however, the natural mechanism of this synchronization is still unexplored. The difficulty encountered when coordinating prosody and metaphoric head-arm gestures concerns the conveyed meaning, the way gestures are performed with respect to prosodic characteristics, their relative temporal arrangement, and their coordinated organization in the phrasal structure of the utterance. In this research, we focus on the mechanism of mapping between head-arm gestures and speech prosodic characteristics in order to generate a robot behavior adapted to the interacting human's emotional state. Prosody patterns and the motion curves of head-arm gestures are modeled separately with parallel Hidden Markov Models (HMMs). The mapping between speech and head-arm gestures is based on coupled Hidden Markov Models (CHMM), which can be seen as a multi-stream collection of HMMs characterizing the segmented prosody and head-arm gesture data. An emotional-state-based audio-video database has been created for the validation of this study. The obtained results show the effectiveness of the proposed methodology.
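
    The paper's own feature pipeline is not described in the abstract; assuming a standard audio toolkit such as librosa, a prosody stream of the kind such parallel HMMs are trained on could be extracted roughly as follows (illustrative only):

```python
import numpy as np
import librosa   # assumed available; any pitch/energy extractor would do

def prosody_features(wav_path, sr=16000, hop=512):
    """Frame-level prosody stream: fundamental frequency (F0) and energy.
    These per-frame vectors are the kind of segmented prosodic data a
    prosody HMM could be trained on (illustrative, not the paper's pipeline)."""
    y, sr = librosa.load(wav_path, sr=sr)
    f0 = librosa.yin(y, fmin=65.0, fmax=500.0, sr=sr, hop_length=hop)
    energy = librosa.feature.rms(y=y, hop_length=hop)[0]
    T = min(len(f0), len(energy))
    return np.column_stack([f0[:T], energy[:T]])   # shape (frames, 2)

# Each utterance then yields a (frames, 2) prosody stream, while motion capture
# gives a parallel (frames, joints) gesture stream; pairs of such streams are
# what multi-stream models like the CHMM operate on.
```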

    Special Issue on the Grand Challenges of Robotics


    Multi-resolution SLAM for Real World Navigation

    In this paper, a hierarchical multi-resolution approach allowing for high precision and distinctiveness is presented. The method combines the topological and metric paradigms. The metric approach, based on the Kalman filter, uses a new concept to avoid the problem of odometry drift. For the topological framework, the fingerprint-sequence approach is used. During the construction of the topological map, a communication between the two paradigms is established. The fingerprint used for topological navigation also enables re-initialization of the metric localization. The experimental section validates the multi-resolution map representation approach and presents the different steps of the method.
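
    The metric side is based on the Kalman filter; as a minimal, generic illustration (not the paper's formulation), the sketch below shows the usual predict/update cycle in which odometry grows the uncertainty and an absolute observation, such as the position associated with a matched fingerprint, corrects it:

```python
import numpy as np

# Minimal linear Kalman filter over a 2-D robot position (illustrative only).
x = np.zeros(2)            # estimated position
P = np.eye(2) * 1.0        # estimate covariance
Q = np.eye(2) * 0.05       # odometry (process) noise
R = np.eye(2) * 0.2        # observation noise

def predict(odometry_delta):
    """Propagate the estimate with odometry; uncertainty grows (drift)."""
    global x, P
    x = x + odometry_delta
    P = P + Q

def update(z):
    """Correct with an absolute observation, e.g. the position associated
    with a matched fingerprint, which re-initializes metric localization."""
    global x, P
    K = P @ np.linalg.inv(P + R)       # Kalman gain (H = identity)
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P

predict(np.array([0.5, 0.0]))          # move half a metre along x
update(np.array([0.45, 0.02]))         # fingerprint-node position observed
print(x, np.diag(P))
```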